The occurrence of West Nile Virus (WNV) represents one of the most common mosquito-borne zoonotic viral infections. Its circulation is usually linked to climatic and environmental conditions suitable for vector proliferation and virus replication. Accordingly, several statistical models have been developed to shape and forecast WNV circulation: in particular, the recent massive availability of Earth Observation (EO) data, coupled with continuous advances in the field of Artificial Intelligence, offers valuable opportunities. In this paper, we seek to predict WNV circulation by feeding Deep Neural Networks (DNNs) with satellite images, which have been extensively shown to capture environmental and climatic features. Notably, whereas previous approaches analyze each geographical site independently, we propose a spatial-aware approach that also considers the characteristics of nearby sites. Specifically, we build upon Graph Neural Networks (GNNs) to aggregate features from neighboring locations, and further extend these modules to account for multiple relations, such as the temperature and soil-moisture differences between two sites, as well as their geographical distance. Moreover, we inject time-related information directly into the model to account for the seasonality of virus spread. We design an experimental setting that combines satellite images (from the Landsat and Sentinel missions) with ground-truth observations of WNV circulation in Italy. We show that, when paired with an appropriate pre-training stage, our proposed Multi-Adjacency Graph Attention network (MAGAT) consistently leads to higher performance. Finally, we assess the importance of each component of MAGAT in an ablation study.
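The core idea above, aggregating each site's features over several adjacency relations at once, can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual MAGAT layer: the dot-product attention, the function names, and the averaging across relations are all simplifying assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_relation_attention(h, relations):
    """Aggregate node features over several adjacency relations.

    h         : (N, F) per-site features (e.g., satellite-image embeddings)
    relations : list of (N, N) weight matrices (e.g., geographic proximity,
                temperature similarity, soil-moisture similarity)
    Returns an (N, F) matrix where each site blends its neighbors' features,
    with one attention distribution per relation, averaged across relations.
    """
    N, F = h.shape
    out = np.zeros_like(h)
    for A in relations:
        scores = h @ h.T  # toy dot-product attention between site pairs
        for i in range(N):
            nbrs = np.nonzero(A[i])[0]
            if nbrs.size == 0:
                continue  # isolated site under this relation
            alpha = softmax(scores[i, nbrs] * A[i, nbrs])
            out[i] += alpha @ h[nbrs]
    return out / len(relations)
```

Each relation contributes its own attention distribution, so a site whose neighbors differ between, say, the geographic and the soil-moisture graphs receives a different blend from each.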
This work addresses weakly supervised anomaly detection, where the predictor is allowed to learn not only from normal examples but also from a few labeled anomalies made available during training. In particular, we deal with the localization of anomalous activities within a video stream: this is a very challenging scenario, as training examples come only with video-level annotations (and not frame-level ones). Several recent works have proposed various regularization terms to address it, e.g., by enforcing sparsity and smoothness constraints over the weakly-learned frame-level anomaly scores. In this work, we take inspiration from recent advances in the field of self-supervised learning and ask the model to produce the same scores for different augmentations of the same video sequence. We show that enforcing such an alignment improves the performance of the model on XD-Violence.
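The alignment term described above is conceptually simple: penalize disagreement between the frame-level score sequences produced for two augmented views of the same video. The sketch below uses a mean-squared distance and a weighting coefficient `lam`; both choices are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def consistency_loss(scores_a, scores_b):
    """Mean squared distance between the frame-level anomaly scores
    produced for two augmentations of the same video sequence."""
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    return float(np.mean((scores_a - scores_b) ** 2))

def total_loss(mil_loss, scores_a, scores_b, lam=1.0):
    """Toy overall objective: a video-level MIL loss (computed elsewhere)
    plus the alignment term, weighted by a hypothetical coefficient lam."""
    return mil_loss + lam * consistency_loss(scores_a, scores_b)
```

In practice the two score sequences would come from two stochastic augmentations (cropping, temporal jitter, etc.) of the same clip passed through the same network.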
In recent years, the power demonstrated by Machine Learning (ML) has increasingly attracted the interest of the optimization community, which has begun to leverage ML to enhance and automate the design of exact and approximate algorithms. One combinatorial optimization problem tackled with ML is the Job Shop scheduling Problem (JSP). Most of the works focusing on JSP and ML are based on Deep Reinforcement Learning (DRL), and only a few of them exploit supervised learning techniques. The recurrent reason for avoiding supervised learning seems to be the difficulty of casting the right learning task, i.e., what is meaningful to predict and how to obtain labels. Therefore, we first propose a novel supervised learning task that aims at predicting the quality of machine permutations. Then, we design an original methodology to estimate this quality, which allows us to create an accurate sequential deep learning model (binary accuracy higher than 95%). Finally, we empirically demonstrate the value of predicting the quality of machine permutations by enhancing the performance of a simple Tabu Search algorithm inspired by works in the literature.
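To make the Tabu Search side of the abstract concrete, here is a minimal tabu search over permutations with a swap neighborhood, a short-term tabu list, and an aspiration criterion. It is a generic sketch on a toy objective, not the paper's JSP setup; the comment marks where a learned quality predictor could plug in, which is a hypothetical hook.

```python
import itertools
import random

def tabu_search(cost, n, iters=200, tabu_len=7, seed=0):
    """Minimal tabu search over permutations of range(n).

    cost : function mapping a permutation (tuple) to a scalar to minimize.
    Neighborhood = all pairwise swaps; a short-term tabu list forbids
    recently used swap moves.  In the spirit of the abstract, a learned
    predictor of machine-permutation quality could rank or filter these
    neighbors before evaluation (hypothetical hook, not shown here).
    """
    rng = random.Random(seed)
    cur = list(range(n)); rng.shuffle(cur)
    best, best_c = tuple(cur), cost(tuple(cur))
    tabu = []
    for _ in range(iters):
        cand = None
        for i, j in itertools.combinations(range(n), 2):
            nb = cur[:]; nb[i], nb[j] = nb[j], nb[i]
            c = cost(tuple(nb))
            # aspiration: a tabu move is allowed if it beats the global best
            if (i, j) in tabu and c >= best_c:
                continue
            if cand is None or c < cand[0]:
                cand = (c, nb, (i, j))
        if cand is None:
            break
        c, cur, move = cand
        tabu.append(move)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if c < best_c:
            best, best_c = tuple(cur), c
    return best, best_c
```

With a real JSP instance, `cost` would be the makespan of the schedule induced by the machine permutation.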
This work investigates the entanglement between Continual Learning (CL) and Transfer Learning (TL). In particular, we shed light on the widespread application of network pretraining, highlighting that it is itself subject to catastrophic forgetting. Unfortunately, this issue leads to the under-exploitation of knowledge transfer during later tasks. On this ground, we propose Transfer without Forgetting (TwF), a hybrid approach building upon a fixed pretrained sibling network, which continuously propagates the knowledge inherent in the source domain through a layer-wise loss term. Our experiments indicate that TwF steadily outperforms other CL methods across a variety of settings, averaging a gain of 4.81% in Class-Incremental accuracy over a variety of datasets and different buffer sizes.
A staple of human intelligence is the capability of acquiring knowledge in a continuous fashion. In stark contrast, deep networks forget catastrophically and, for this reason, the sub-field of Class-Incremental Continual Learning fosters methods that learn a sequence of tasks incrementally, blending sequentially-gained knowledge into a comprehensive prediction. This work aims at assessing and overcoming the pitfalls of our previous proposal Dark Experience Replay (DER), a simple and effective approach that combines rehearsal and Knowledge Distillation. Inspired by the way our minds constantly rewrite past recollections and set expectations for the future, we endow our model with the abilities to i) revise its replay memory to welcome novel information regarding past data and ii) pave the way for learning yet unseen classes. We show that the application of these strategies leads to remarkable improvements; indeed, the resulting method -- termed eXtended-DER (X-DER) -- outperforms the state of the art on both standard benchmarks (such as CIFAR-100 and miniImageNet) and a novel one here introduced. To gain a better understanding, we further provide extensive ablation studies that corroborate and extend the findings of our previous research (e.g., the value of Knowledge Distillation and flatter minima in continual learning setups).
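A DER-style rehearsal memory boils down to a reservoir-sampled buffer of (example, label, logits) triples, which X-DER additionally revises as new classes arrive. The class below is an illustrative sketch under those assumptions; in particular, `update_logits` is a simplified stand-in for the paper's memory-revision mechanism, not its exact rule.

```python
import random

class ReplayBuffer:
    """DER-style rehearsal memory: stores (example, label, logits) triples
    via reservoir sampling, so every item in the stream is kept with equal
    probability regardless of when it arrived."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example, label, logits):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append([example, label, list(logits)])
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:  # replace a slot with prob capacity/seen
                self.data[j] = [example, label, list(logits)]

    def update_logits(self, idx, new_logits):
        """X-DER-style memory revision (simplified): append logit entries
        for classes learned after the item was stored; past-class logits
        are kept as-is in this toy sketch."""
        stored = self.data[idx][2]
        for k, v in enumerate(new_logits):
            if k >= len(stored):
                stored.append(v)

    def sample(self, k):
        return self.rng.sample(self.data, min(k, len(self.data)))
```

Replayed triples would feed both a rehearsal loss on the labels and a distillation loss on the stored logits.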
Continual Learning (CL) investigates how to train deep networks on a stream of tasks without incurring forgetting. The CL settings proposed in the literature assume that every incoming example is paired with ground-truth annotations. However, this clashes with many real-world applications: this work explores Continual Semi-Supervised Learning (CSSL), where only a small fraction of labeled input examples is shown to the learner. We assess how current CL methods (e.g., EWC, LwF, iCaRL, ER, GDumb, DER) perform in this novel and challenging scenario, in which overfitting entangles with forgetting. Subsequently, we design a novel CSSL method that exploits metric learning and consistency regularization to leverage unlabeled examples while learning. We show that our proposal exhibits higher resilience to diminishing supervision and, even more surprisingly, that relying on only 25% supervision suffices to outperform SOTA methods trained under full supervision.
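The semi-supervised objective hinted at above can be sketched as a supervised cross-entropy term on the few labeled examples plus a consistency term on augmentation pairs of unlabeled ones. This is an illustrative toy objective, not the paper's exact formulation; the metric-learning component is omitted, and the weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-probability of the true class."""
    return -float(np.log(probs[label] + 1e-12))

def cssl_loss(labeled, unlabeled_pairs, lam=0.5):
    """Toy continual semi-supervised objective.

    labeled         : list of (predicted class-probability vector, int label)
    unlabeled_pairs : list of (probs for view 1, probs for view 2), the two
                      views being augmentations of the same unlabeled input
    """
    sup = sum(cross_entropy(p, y) for p, y in labeled) / max(len(labeled), 1)
    cons = sum(float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
               for a, b in unlabeled_pairs) / max(len(unlabeled_pairs), 1)
    return sup + lam * cons
```

The consistency term supplies a training signal for the 75%+ of the stream that arrives without labels.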
Novelty detection is commonly referred to as the discrimination of observations that do not conform to a learned model of regularity. Despite its importance in different application settings, designing a novelty detector is utterly complex due to the unpredictable nature of novelties and their inaccessibility during the training procedure, factors which expose the unsupervised nature of the problem. In our proposal, we design a general framework where we equip a deep autoencoder with a parametric density estimator that learns the probability distribution underlying its latent representations through an autoregressive procedure. We show that a maximum likelihood objective, optimized in conjunction with the reconstruction of normal samples, effectively acts as a regularizer for the task at hand, by minimizing the differential entropy of the distribution spanned by latent vectors. In addition to providing a very general formulation, extensive experiments of our model on publicly available datasets deliver on-par or superior performances if compared to state-of-the-art methods in one-class and video anomaly detection settings. Differently from prior works, our proposal does not make any assumption about the nature of the novelties, making our work readily applicable to diverse contexts.
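The autoregressive density-estimation idea, factorizing p(z) over latent dimensions and flagging low-likelihood codes as novel, can be illustrated with a linear-Gaussian stand-in. This is a deliberately shallow sketch of the concept: the paper's estimator is a deep network, whereas here each conditional is fit by ordinary least squares.

```python
import numpy as np

class AutoregressiveGaussian:
    """Toy autoregressive density estimator over latent vectors: models
    p(z) = prod_i p(z_i | z_<i) with one linear-Gaussian conditional per
    dimension, fit by least squares on 'normal' training codes."""

    def fit(self, Z):
        Z = np.asarray(Z, float)
        n, d = Z.shape
        self.params = []
        for i in range(d):
            X = np.hstack([Z[:, :i], np.ones((n, 1))])  # z_<i plus bias
            w, *_ = np.linalg.lstsq(X, Z[:, i], rcond=None)
            resid = Z[:, i] - X @ w
            self.params.append((w, max(resid.var(), 1e-6)))
        return self

    def nll(self, z):
        """Negative log-likelihood of a latent code; high values flag
        samples that do not conform to the learned regularity."""
        z = np.asarray(z, float)
        total = 0.0
        for i, (w, var) in enumerate(self.params):
            x = np.append(z[:i], 1.0)
            mu = float(x @ w)
            total += 0.5 * (np.log(2 * np.pi * var) + (z[i] - mu) ** 2 / var)
        return total
```

In the full framework this negative log-likelihood is both the novelty score at test time and, during training, the term that shrinks the differential entropy of the latent distribution.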
Linguists distinguish between novel and conventional metaphor, a distinction which the metaphor detection task in NLP does not take into account. Instead, metaphoricity is formulated as a property of a token in a sentence, regardless of metaphor type. In this paper, we investigate the limitations of treating conventional metaphors in this way, and advocate for an alternative which we name 'metaphorical polysemy detection' (MPD). In MPD, only conventional metaphoricity is treated, and it is formulated as a property of word senses in a lexicon. We develop the first MPD model, which learns to identify conventional metaphors in the English WordNet. To train it, we present a novel training procedure that combines metaphor detection with word sense disambiguation (WSD). For evaluation, we manually annotate metaphor in two subsets of WordNet. Our model significantly outperforms a strong baseline based on a state-of-the-art metaphor detection model, attaining an ROC-AUC score of .78 (compared to .65) on one of the sets. Additionally, when paired with a WSD model, our approach outperforms a state-of-the-art metaphor detection model at identifying conventional metaphors in text (.659 F1 compared to .626).
A widely acknowledged shortcoming of WordNet is that it lacks a distinction between word meanings which are systematically related (polysemy), and those which are coincidental (homonymy). Several previous works have attempted to fill this gap, by inferring this information using computational methods. We revisit this task, and exploit recent advances in language modelling to synthesise homonymy annotation for Princeton WordNet. Previous approaches treat the problem using clustering methods; by contrast, our method works by linking WordNet to the Oxford English Dictionary, which contains the information we need. To perform this alignment, we pair definitions based on their proximity in an embedding space produced by a Transformer model. Despite the simplicity of this approach, our best model attains an F1 of .97 on an evaluation set that we annotate. The outcome of our work is a high-quality homonymy annotation layer for Princeton WordNet, which we release.
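The alignment step described above reduces to nearest-neighbor matching of definition embeddings under cosine similarity. The helper below is a sketch of that step only; in practice the embeddings would come from a Transformer sentence encoder, which is outside this snippet.

```python
import numpy as np

def align_definitions(wn_vecs, oed_vecs):
    """Pair each WordNet definition embedding with its nearest Oxford
    English Dictionary definition embedding by cosine similarity.

    Returns, for each WordNet sense, the index of the closest OED sense.
    """
    A = np.asarray(wn_vecs, float)
    B = np.asarray(oed_vecs, float)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    sim = A @ B.T  # cosine similarity of every WN/OED definition pair
    return sim.argmax(axis=1)
```

Once each WordNet sense is linked to an OED sense, the OED's own polysemy/homonymy structure can be projected back onto WordNet.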
Binarized Neural Networks (BNNs) are receiving increasing attention due to their lightweight architecture and ability to run on low-power devices. The state-of-the-art for training classification BNNs restricted to few-shot learning is based on a Mixed Integer Programming (MIP) approach. This paper proposes the BeMi ensemble, a structured architecture of BNNs based on training a single BNN for each possible pair of classes and applying a majority voting scheme to predict the final output. The training of a single BNN discriminating between two classes is achieved by a MIP model that optimizes a lexicographic multi-objective function according to robustness and simplicity principles. This approach results in training networks whose output is not affected by small perturbations on the input and whose number of active weights is as small as possible, while good accuracy is preserved. We computationally validate our model using the MNIST and Fashion-MNIST datasets using up to 40 training images per class. Our structured ensemble outperforms both BNNs trained by stochastic gradient descent and state-of-the-art MIP-based approaches. While the previous approaches achieve an average accuracy of 51.1% on the MNIST dataset, the BeMi ensemble achieves an average accuracy of 61.7% when trained with 10 images per class and 76.4% when trained with 40 images per class.
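The BeMi inference scheme, one binary classifier per class pair plus majority voting, is independent of how each pairwise BNN is trained. The sketch below shows only that voting structure; the per-pair training (the lexicographic multi-objective MIP over BNN weights) is outside its scope, and the helper names are illustrative.

```python
from collections import Counter
from itertools import combinations

def make_ovo(classes, train_pair):
    """Build one classifier per unordered class pair.

    train_pair(a, b) must return a callable that maps an input x to
    either a or b (in BeMi, that callable would be a MIP-trained BNN).
    """
    return {(a, b): train_pair(a, b) for a, b in combinations(classes, 2)}

def ovo_predict(classifiers, x):
    """Majority vote across all pairwise predictions."""
    votes = Counter(clf(x) for clf in classifiers.values())
    return votes.most_common(1)[0][0]
```

With C classes this trains C(C-1)/2 small networks, each of which only ever has to separate two classes, which is what keeps the individual MIP models tractable in the few-shot regime.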